e-Learn @ SASTRA

ML Introduction

ANN - Introduction 

Perceptrons - Example - Tutorial
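
Alongside the perceptron tutorial, a minimal sketch of the perceptron training rule on a toy AND dataset; the data, learning rate, and epoch count are illustrative assumptions, not course material:

```python
import numpy as np

# Toy AND dataset (illustrative): each row is (bias, x1, x2), targets in {0, 1}
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
t = np.array([0, 0, 0, 1])

w = np.zeros(3)   # weights, including the bias weight
eta = 0.1         # learning rate (assumed)

for epoch in range(20):
    for x, target in zip(X, t):
        o = 1 if w @ x > 0 else 0       # threshold-unit output
        w += eta * (target - o) * x     # perceptron training rule
print(w)  # a weight vector that separates AND
```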

Multilayer Networks

Backpropagation Algorithm & Example
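
A compact sketch of batch backpropagation for a two-layer sigmoid network, in the spirit of the worked example above; the XOR training data, layer sizes, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Illustrative training data (assumed): XOR, with a leading 1 as the bias input
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(3, 3))   # bias+inputs -> 3 sigmoid hidden units
W2 = rng.normal(scale=0.5, size=(4, 1))   # bias+hidden -> 1 sigmoid output unit
eta = 0.5                                 # learning rate (assumed)

for step in range(10000):
    H = np.hstack([np.ones((len(X), 1)), sigmoid(X @ W1)])  # forward: hidden layer
    O = sigmoid(H @ W2)                                     # forward: output layer
    d_o = (O - T) * O * (1 - O)                         # output error term
    d_h = (d_o @ W2[1:].T) * H[:, 1:] * (1 - H[:, 1:])  # backpropagated hidden error
    W2 -= eta * H.T @ d_o                               # gradient-descent updates
    W1 -= eta * X.T @ d_h

print(O.round(2))   # outputs typically approach [0, 1, 1, 0]
```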

Derivation of Backpropagation

Remarks on BPN & an Illustrative Example

Tutorial - Gradient Descent
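
To accompany the gradient-descent tutorial, a minimal one-dimensional sketch; the objective, starting point, and step size are assumed for illustration:

```python
# Minimize f(w) = (w - 3)^2 by repeatedly stepping against the gradient
f = lambda w: (w - 3.0) ** 2
grad = lambda w: 2.0 * (w - 3.0)   # analytic derivative of f

w = 0.0     # starting point (assumed)
eta = 0.1   # learning rate (assumed)
for step in range(50):
    w -= eta * grad(w)
print(w, f(w))   # w approaches the minimizer 3.0
```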

Evaluating Hypotheses: Motivation, Accuracy, A General Approach for Deriving Confidence Intervals, Difference in Error of Two Hypotheses, Comparing Learning Algorithms
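
As a worked instance of the confidence-interval material, a sketch of the usual normal-approximation interval for a hypothesis's sample error; the counts and z-value below are assumed for illustration:

```python
import math

# With r misclassifications out of n test examples, the sample error is r/n;
# an approximate two-sided 95% interval uses the normal approximation.
r, n, z95 = 12, 100, 1.96   # counts and z-value (assumed for illustration)
e = r / n
half = z95 * math.sqrt(e * (1 - e) / n)
print(f"error = {e:.2f} +/- {half:.2f}")   # 0.12 +/- 0.06
```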

Bayesian Belief Network
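
A minimal belief-network sketch using the classic rain/sprinkler/wet-grass structure; the graph and the conditional-probability-table numbers are illustrative assumptions:

```python
# Structure (assumed): Rain -> Sprinkler, and both Rain and Sprinkler -> WetGrass
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},    # P(S | R=True)
               False: {True: 0.40, False: 0.60}}   # P(S | R=False)
P_wet = {(True, True): 0.99, (True, False): 0.90,  # P(W=True | S, R)
         (False, True): 0.80, (False, False): 0.00}

def joint(r, s, w):
    """P(R=r, S=s, W=w) via the chain rule over the network structure."""
    pw = P_wet[(s, r)] if w else 1.0 - P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[r][s] * pw

# P(Rain | WetGrass) by enumerating out the hidden variable S
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(num / den)   # posterior probability of rain given wet grass, about 0.36
```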

Analytical Learning: Learning with Perfect Domain Theories, Explanation-Based Learning of Search Control Knowledge

Combining Inductive and Analytical Learning: Motivation - Inductive-Analytical Approaches to Learning

Using Prior Knowledge to Initialize the Hypothesis - EBNN & TangentProp

FOCL

Machine Learning Basics: Learning Algorithms - Capacity, Overfitting and Underfitting

Hyperparameters and Validation Sets

Estimators, Bias and Variance

Maximum Likelihood Estimation 
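
A short sketch of maximum-likelihood estimation for a Gaussian on simulated data; the true parameters and sample size are assumed. For this model the MLEs reduce to the sample mean and the biased sample variance:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)   # simulated data (assumed)

mu_hat = x.mean()                      # argmax of the log-likelihood in mu
var_hat = ((x - mu_hat) ** 2).mean()   # MLE of sigma^2 (divides by n, not n-1)
print(mu_hat, var_hat)                 # close to the true values 5.0 and 4.0
```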

Bayesian statistics 

Supervised Learning Algorithms, Unsupervised Learning Algorithms, Stochastic Gradient Descent - Building Machine Learning Algorithms
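
A sketch of stochastic gradient descent fitting a one-variable linear model on synthetic data, updating on one example at a time; the data-generating parameters, learning rate, and epoch count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=200)   # true w=2.0, b=0.5 (assumed)

w, b, eta = 0.0, 0.0, 0.05
for epoch in range(20):
    for i in rng.permutation(len(x)):     # reshuffle the examples each epoch
        err = (w * x[i] + b) - y[i]       # residual on a single example
        w -= eta * err * x[i]             # per-example gradient steps
        b -= eta * err
print(w, b)   # approaches the true parameters
```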

Challenges Motivating Deep Learning; Deep Feedforward Networks: Example: Learning XOR, Gradient-Based Learning, Hidden Units
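
For the Learning XOR example, a sketch of the well-known hand-constructed ReLU network that computes XOR exactly; the weights are fixed by hand rather than learned:

```python
import numpy as np

W = np.array([[1.0, 1.0], [1.0, 1.0]])   # input -> hidden weights
c = np.array([0.0, -1.0])                # hidden biases
w = np.array([1.0, -2.0])                # hidden -> output weights

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
H = np.maximum(0.0, X @ W + c)           # ReLU hidden layer
print(H @ w)                             # [0, 1, 1, 0] = XOR exactly
```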

Regularization for Deep Learning: Parameter Norm Penalties - Norm Penalties as Constrained Optimization, Regularization and Under-Constrained Problems, Dataset Augmentation - Noise Robustness
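
A tiny sketch of how an L2 parameter-norm penalty enters a gradient step: penalizing (lam/2) * ||w||^2 simply adds lam * w to the data gradient, which is why it is also called weight decay. The lam and eta values are assumed:

```python
import numpy as np

lam, eta = 0.01, 0.1   # penalty strength and step size (assumed)

def sgd_step(w, grad_data):
    # The penalty's gradient lam * w shrinks the weights on every step
    return w - eta * (grad_data + lam * w)

w = np.array([1.0, -2.0])
print(sgd_step(w, np.zeros(2)))   # with zero data gradient, w just shrinks
```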

Semi-supervised Learning - Multi-task Learning - Early Stopping - Parameter Tying and Parameter Sharing - Sparse Representations

Bagging and Other Ensemble Methods, Dropout
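
A sketch of inverted dropout at the level of a single activation vector; the keep probability is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.8   # probability of keeping a unit (assumed)

def dropout(h, training=True):
    if not training:
        return h                        # test time: use all units unchanged
    mask = rng.random(h.shape) < p      # zero each unit independently
    return h * mask / p                 # 1/p scaling keeps the expected value

h = np.ones(10)
print(dropout(h))   # some entries zeroed, kept entries scaled to 1.25
```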

Optimization for Training Deep Models: How Learning Differs from Pure Optimization, Challenges in Neural Network Optimization - Basic Algorithms, Parameter Initialization Strategies, Algorithms with Adaptive Learning Rates, Approximate Second-Order Methods, Optimization Strategies and Meta-Algorithms, Applications